
Reinforcement learning versus model predictive control: a comparison on a power system problem


Abstract

This paper compares reinforcement learning (RL) with model predictive control (MPC) in a unified framework and reports experimental results from their application to the synthesis of a controller for a nonlinear, deterministic electrical power oscillation damping problem. Both families of methods are based on the formulation of the control problem as a discrete-time optimal control problem. The considered MPC approach exploits an analytical model of the system dynamics and cost function, and computes open-loop policies by applying an interior-point solver to a minimization problem in which the system dynamics are represented by equality constraints. The considered RL approach infers closed-loop policies in a model-free way from a set of system trajectories and instantaneous cost values, by solving a sequence of batch-mode supervised learning problems. The results obtained provide insight into the pros and cons of the two approaches and show that RL can indeed be competitive with MPC, even in contexts where a good deterministic system model is available.
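
For concreteness, the discrete-time optimal control problem that the MPC approach solves can be written as the following generic program. This is only a sketch: the horizon T, the dynamics f, the stage cost c, and the current state x are placeholder symbols, since the abstract does not give the exact formulation used in the paper.

\min_{u_0, \dots, u_{T-1}} \sum_{t=0}^{T-1} c(x_t, u_t)
\quad \text{subject to} \quad x_{t+1} = f(x_t, u_t), \quad t = 0, \dots, T-1, \qquad x_0 = x.

An interior-point solver treats the dynamics x_{t+1} = f(x_t, u_t) as equality constraints on the decision variables and returns the open-loop action sequence u_0, ..., u_{T-1}; in the usual receding-horizon scheme, only the first action is applied before the problem is re-solved from the next state.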
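The batch-mode RL procedure described above can be illustrated with fitted Q iteration, a standard algorithm of this family in which each iteration amounts to one supervised learning problem. The sketch below is illustrative only: the four-tuple transition format, the finite action set, the discount factor gamma, and the use of scikit-learn's ExtraTreesRegressor as the supervised learner are assumptions, not details taken from the abstract.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, gamma=0.95, n_iterations=50):
    """Batch-mode RL sketch: each iteration fits a regressor to Bellman targets.

    transitions: list of one-step samples (x, u, c, x_next), where x and
    x_next are 1-D state vectors, u is a scalar action from `actions`,
    and c is the instantaneous cost observed for (x, u). All names here
    are illustrative, not taken from the paper.
    """
    X = np.array([np.append(x, u) for x, u, _, _ in transitions])
    cost = np.array([c for _, _, c, _ in transitions])
    x_next = np.array([xn for _, _, _, xn in transitions])
    q_model = None
    for _ in range(n_iterations):
        if q_model is None:
            targets = cost  # first iteration: Q_1(x, u) = c(x, u)
        else:
            # Bellman backup: c + gamma * min over u' of Q_{N-1}(x', u')
            q_next = np.column_stack([
                q_model.predict(
                    np.column_stack([x_next, np.full(len(x_next), u)]))
                for u in actions])
            targets = cost + gamma * q_next.min(axis=1)
        q_model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)

    def policy(x):
        # Closed-loop policy: greedy (cost-minimizing) action at state x
        q_values = [q_model.predict(np.append(x, u).reshape(1, -1))[0]
                    for u in actions]
        return actions[int(np.argmin(q_values))]
    return policy

Each pass through the loop fits one regressor to targets built from the previous Q-function estimate, and the greedy policy extracted from the final regressor is a closed-loop controller obtained from trajectories and costs alone, with no model of the dynamics f.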
